Researchers at ETH Zurich have developed a method that makes AI answers increasingly reliable over time. Their algorithm is highly selective in choosing its data. In addition, AI models up to 40 times smaller can achieve the same answer quality as the best large AI models.
ChatGPT and similar tools often amaze us with the accuracy of their answers, but they also frequently give cause for doubt. A major challenge of powerful AI answering machines is that they deliver perfect answers and obvious nonsense with the same ease. One underlying problem is how large language models (LLMs) deal with uncertainty. Until now, it has been very difficult to judge whether an LLM designed for text processing and generation bases its answers on a solid foundation of data or is operating on uncertain ground.
Researchers from the Institute for Machine Learning in the Department of Computer Science at ETH Zurich have now developed a method to specifically reduce this uncertainty. 'Our algorithm can enrich the general language model of the AI with additional data from the thematic area relevant to the question. Combined with the specific question, we can then retrieve from the depths of the model and from the enrichment data precisely those relationships that are likely to generate a correct answer,' explains Jonas Hübotter from the Learning & Adaptive Systems Group, who developed the new method as part of his doctoral studies.
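The core idea of enriching a general model with question-relevant data can be illustrated with a minimal sketch. This is not the researchers' actual algorithm: it uses a simple cosine-similarity ranking over toy embedding vectors to select the most relevant documents, whereas the real method's selection criterion and fine-tuning step are more sophisticated. All names (`select_enrichment_data`, the toy corpus) are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two plain-Python vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_enrichment_data(question_vec, corpus, k=2):
    """Pick the k corpus entries most similar to the question embedding.

    In a real system, the selected texts would then be used to enrich
    the model (e.g. as retrieved context or test-time training data).
    """
    ranked = sorted(corpus, key=lambda d: cosine(question_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy embeddings standing in for a learned encoder's output.
question = [1.0, 0.0, 0.2]
corpus = [
    {"text": "on-topic document A", "vec": [0.9, 0.1, 0.1]},
    {"text": "off-topic document",  "vec": [0.0, 1.0, 0.0]},
    {"text": "on-topic document B", "vec": [0.8, 0.0, 0.3]},
]
print(select_enrichment_data(question, corpus))
# → ['on-topic document A', 'on-topic document B']
```

The selective step matters because feeding the model only the most relevant data, rather than everything available, is what allows much smaller models to remain competitive on a given question.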